ABSTRACT
OBJECTIVE: To compare patient responses to validated satisfaction surveys for in-person vs virtual otolaryngology ambulatory evaluation. METHODS: National Research Corporation (NRC) Health patient survey answers between April 2020 and February 2021 were divided into in-person and virtual visit modalities. Responses were compared with two-group t tests or Wilcoxon rank sum tests. Relationships between visit modality and satisfaction scores by gender, age, race, and sub-specialty visit type were examined by testing interactions in separate ANOVA models. RESULTS: 1242 in-person and 216 virtual patient satisfaction survey responses were highly favorable for all themes (communication, comprehension of treatment plan, and likelihood of future referral) with both visit modalities. Higher satisfaction for in-person evaluation was seen for communication ("care providers listened": 3.68 (0.67) vs 3.57 (0.78), on a scale of 1 = no to 4 = yes, definitely, p = 0.0426; "courtesy/respect": 3.75 (0.62) vs 3.66 (0.69), p = 0.0265) and comprehension of treatment plan ("enough info about treatment": 3.53 (0.79) vs 3.37 (0.92), p = 0.0120; "know what to do": 3.62 (0.76) vs 3.46 (0.88), p = 0.0023). No differences were detected for likelihood of future referral to the clinic or provider. There was no interaction between visit modality and patient sociodemographic factors or sub-specialty visit types, although main effects were observed for race, gender, and sub-specialty visit type. CONCLUSION: Patient satisfaction scores for virtual visit evaluation were high and comparable to in-person evaluation, with a slight preference for in-person. Future studies are needed to identify which patients and conditions are particularly suited for virtual vs in-person delivery of otolaryngology services.
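For illustration only, the two-group comparison described in the methods can be sketched in pure Python. The scores below are synthetic stand-ins on the study's 1-4 scale, not the study's data; in practice a statistics package (e.g. SciPy's `ttest_ind` or `ranksums`) would supply the p-values.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)  # sample variances
    n1, n2 = len(a), len(b)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical example: satisfaction scores (1 = no ... 4 = yes, definitely)
in_person = [4, 4, 3, 4, 4, 3]
virtual   = [3, 3, 3, 4, 3, 3]
t = welch_t(in_person, virtual)  # positive t favors the in-person group
```

A positive statistic here simply indicates a higher mean in the first group; the study additionally used Wilcoxon rank sum tests, which are more appropriate when the 4-point responses are treated as ordinal.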
Subject(s)
Otolaryngology, Ambulatory Care Facilities, Humans, Otolaryngology/methods, Patient Satisfaction, Referral and Consultation, Surveys and Questionnaires
ABSTRACT
BACKGROUND: Web-based health interventions are increasingly common and are promising for patients with voice disorders because web-based participation does not require voice use. To address needs such as Health Insurance Portability and Accountability Act compliance, unique user access, the ability to send automated reminders, and a limited development budget, we used the Research Electronic Data Capture (REDCap) data management platform to deliver a patient-facing psychological intervention designed for patients with voice disorders. This was a novel use of REDCap. OBJECTIVE: We aimed to evaluate the usability of the intervention, with this intervention serving as a use case for REDCap-based patient-facing interventions. METHODS: We used REDCap survey instruments to develop the web-based voice intervention modules, then conducted usability evaluations using (1) heuristic evaluations by 2 evaluators, and (2) formal usability testing with 7 participants, consisting of predetermined tasks, a think-aloud protocol, ease-of-use measurements, a product reaction card, and a debriefing interview. RESULTS: Heuristic evaluations found strengths in visibility of system status and real-world match, and weaknesses in user control and help documentation. Based on this feedback, changes to the intervention were made before usability testing. Overall, usability testing participants found the intervention useful and easy to use, although testing revealed some concerns with design, content, and terminology. Some concerns were readily addressed, and others required adaptations within REDCap. CONCLUSIONS: The REDCap version of a complex web-based patient-facing intervention performed well in heuristic evaluation and formal usability testing. REDCap can effectively be used for patient-facing intervention delivery, particularly if the limitations of the platform are anticipated and mitigated.
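REDCap projects expose a REST API that study teams commonly use to export survey records, for example to monitor module completion. A minimal sketch of assembling such an export request is shown below; the URL, token, and field names are hypothetical placeholders, not details from this study.

```python
# Sketch: building the form-encoded payload for a REDCap record export.
# Token, endpoint, and field names are illustrative placeholders only.

def build_export_payload(token, fields):
    """Return REDCap API parameters for a flat JSON record export."""
    return {
        "token": token,             # project-specific API token
        "content": "record",        # ask for record (survey response) data
        "format": "json",           # response serialization
        "type": "flat",             # one row per record
        "fields": ",".join(fields), # comma-separated field list
    }

payload = build_export_payload(
    "EXAMPLE_TOKEN", ["record_id", "module1_complete"]
)
# The payload would then be POSTed to the project's /api/ endpoint,
# e.g. requests.post("https://redcap.example.org/api/", data=payload)
```

Because delivery happens entirely within REDCap survey instruments, this kind of export is a monitoring convenience rather than part of the intervention itself.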